Object segmentation in depth maps with one user click and a synthetically trained fully convolutional network

機(jī)譯:深度圖中的對(duì)象分割,具有一個(gè)用戶點(diǎn)擊和一個(gè)經(jīng)過綜合訓(xùn)練的完全卷積網(wǎng)絡(luò)


Abstract

With more and more household objects built on planned obsolescence and consumed by a fast-growing population, hazardous waste recycling has become a critical challenge. Given the large variability of household waste, current recycling platforms mostly rely on human operators to analyze the scene, typically composed of many object instances piled up in bulk. Helping them by robotizing the unitary extraction of objects is a key challenge for speeding up this tedious process. Whereas supervised deep learning has proven very efficient for such object-level scene understanding, e.g., generic object detection and segmentation in everyday scenes, it requires large sets of per-pixel labeled images, which are hardly available for numerous application contexts, including industrial robotics. We thus propose a step towards a practical interactive application for generating an object-oriented robotic grasp, requiring as inputs only one depth map of the scene and one user click on the next object to extract. More precisely, this paper addresses the intermediate problem of object segmentation in top views of piles of bulk objects, given a pixel location (the seed) provided interactively by a human operator. We propose a twofold framework for generating edge-driven instance segments. First, we repurpose a state-of-the-art fully convolutional object contour detector for seed-based instance segmentation by introducing the notion of edge-mask duality with a novel patch-free and contour-oriented loss function. Second, we train the model using only synthetic scenes, instead of manually labeled training data. Our experimental results show that training an encoder-decoder network with edge-mask duality, as we suggest, outperforms a state-of-the-art patch-based network in the present application context.
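
To make the seed-conditioned setup concrete, below is a minimal PyTorch sketch of the overall pattern the abstract describes: concatenate the depth map with a heatmap encoding the user click, predict a per-pixel map for the clicked object with an encoder-decoder, and weight the loss toward pixels near ground-truth object contours. This is an illustrative assumption, not the paper's actual architecture or loss: the toy network, the Gaussian seed encoding, and the contour-weighted binary cross-entropy (a stand-in for the paper's patch-free, contour-oriented loss built on edge-mask duality) are all hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SeedEncoderDecoder(nn.Module):
    # Toy encoder-decoder (illustrative only, NOT the paper's network):
    # input is a depth map plus a seed heatmap; output is a per-pixel
    # logit map for the object under the user's click.
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, depth, seed):
        x = torch.cat([depth, seed], dim=1)  # condition the network on the click
        return self.dec(self.enc2(self.enc1(x)))

def seed_heatmap(h, w, click, sigma=5.0):
    # Encode the user click (row, col) as a Gaussian "seed" channel.
    ys = torch.arange(h, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(w, dtype=torch.float32).view(1, -1)
    g = torch.exp(-((ys - click[0]) ** 2 + (xs - click[1]) ** 2) / (2 * sigma ** 2))
    return g.view(1, 1, h, w)

def contour_weighted_bce(logits, mask, edge_weight=10.0):
    # Hypothetical stand-in for the paper's patch-free, contour-oriented
    # loss: plain BCE, but pixels adjacent to a ground-truth mask edge
    # are up-weighted so training focuses on object contours.
    pad = F.pad(mask, (1, 1, 1, 1), mode="replicate")
    edges = ((pad[..., 1:-1, :-2] != mask) | (pad[..., 1:-1, 2:] != mask) |
             (pad[..., :-2, 1:-1] != mask) | (pad[..., 2:, 1:-1] != mask)).float()
    weights = 1.0 + edge_weight * edges
    return F.binary_cross_entropy_with_logits(logits, mask, weight=weights)

# Usage on random tensors (shapes only; real inputs would be rendered
# piles of objects, mirroring the paper's synthetic training setup):
depth = torch.rand(1, 1, 64, 64)             # top-view depth map
seed = seed_heatmap(64, 64, click=(30, 40))  # one user click
gt_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()

net = SeedEncoderDecoder()
loss = contour_weighted_bce(net(depth, seed), gt_mask)
loss.backward()

In the paper itself, the loss is derived from the edge-mask duality of a repurposed fully convolutional contour detector rather than from a hand-set weight map; the sketch above only mirrors the input/output contract (depth map + click in, instance segment out).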
機(jī)譯:隨著越來(lái)越多的家庭物品過時(shí)而過時(shí)并被快速增長(zhǎng)的人口消耗,危險(xiǎn)廢物的回收已成為一項(xiàng)嚴(yán)峻的挑戰(zhàn)。考慮到生活垃圾的多變性,當(dāng)前的回收平臺(tái)主要依靠操作員來(lái)分析現(xiàn)場(chǎng),通常由大量堆積的物體實(shí)例組成。通過對(duì)單一提取進(jìn)行機(jī)械化來(lái)幫助他們,是加快這一繁瑣過程的關(guān)鍵挑戰(zhàn)。事實(shí)證明,有監(jiān)督的深度學(xué)習(xí)對(duì)于這種對(duì)象級(jí)場(chǎng)景的理解(例如,日常場(chǎng)景中的常規(guī)對(duì)象檢測(cè)和分割)非常有效,但是它需要大量的每像素標(biāo)記的圖像集,而這些圖像幾乎無(wú)法用于包括工業(yè)應(yīng)用在內(nèi)的多種應(yīng)用環(huán)境機(jī)器人技術(shù)。因此,我們提出了一種邁向?qū)嵱玫慕换ナ綉?yīng)用程序的步驟,該應(yīng)用程序用于生成面向?qū)ο蟮臋C(jī)器人抓握,僅要求將場(chǎng)景的一個(gè)深度圖作為輸入,并且一個(gè)用戶單擊要提取的下一個(gè)對(duì)象即可。更準(zhǔn)確地說(shuō),我們?cè)诒疚闹薪鉀Q了對(duì)象分割的中間問題,即在給定像素位置(即種子)的一堆散裝對(duì)象的頂視圖中,由操作員交互提供。我們提出了一個(gè)雙重框架來(lái)生成邊緣驅(qū)動(dòng)的實(shí)例段。首先,我們通過引入邊緣遮罩對(duì)偶的概念以及新穎的無(wú)補(bǔ)丁和面向輪廓的損失函數(shù),將最先進(jìn)的全卷積目標(biāo)輪廓檢測(cè)器重新用于基于種子的實(shí)例分割。其次,我們僅使用合成場(chǎng)景而不是手動(dòng)標(biāo)記的訓(xùn)練數(shù)據(jù)來(lái)訓(xùn)練一個(gè)模型。我們的實(shí)驗(yàn)結(jié)果表明,按照我們的建議,考慮到邊緣掩碼對(duì)偶性來(lái)訓(xùn)練編碼器-解碼器網(wǎng)絡(luò),在當(dāng)前應(yīng)用程序上下文中優(yōu)于基于最新補(bǔ)丁的網(wǎng)絡(luò)。
